
    EMGTFNet: Fuzzy Vision Transformer to decode Upperlimb sEMG signals for Hand Gestures Recognition

    Myoelectric control is an area of electromyography of increasing interest, particularly in applications such as Hand Gesture Recognition (HGR) for bionic prostheses. Current work focuses on pattern recognition using Machine Learning and, more recently, Deep Learning methods. Despite achieving good results on sparse sEMG signals, the latter models typically require large datasets and long training times. Furthermore, because sEMG signals are stochastic, traditional models often fail to generalize to atypical or noisy samples. In this paper, we propose a Vision Transformer (ViT) based architecture with a Fuzzy Neural Block (FNB), called EMGTFNet, to perform Hand Gesture Recognition from surface electromyography (sEMG) signals. The proposed EMGTFNet architecture can accurately classify a variety of hand gestures without data augmentation, transfer learning, or a significant increase in the number of network parameters. The accuracy of the proposed model is tested on the publicly available NinaPro database, which comprises 49 different hand gestures. Experiments yield an average test accuracy of 83.57% ± 3.5% using a 200 ms window size and only 56,793 trainable parameters. Our results outperform the ViT without the FNB, demonstrating that including the FNB improves performance. The proposed EMGTFNet framework therefore shows significant potential for practical application in prosthetic control.
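    As a rough, hypothetical illustration of the 200 ms windowing step mentioned above, the Python sketch below segments a multi-channel sEMG recording into fixed-length windows ready for a classifier; the sampling rate, step size, and channel count are assumptions for the example and are not taken from the paper.

```python
import numpy as np

def segment_semg(signal: np.ndarray, fs: int = 2000, window_ms: int = 200,
                 step_ms: int = 100) -> np.ndarray:
    """Slice a multi-channel sEMG recording into fixed-length windows.

    signal: array of shape (n_samples, n_channels)
    fs:     sampling rate in Hz (assumed here; NinaPro sub-databases differ)
    Returns an array of shape (n_windows, window_len, n_channels).
    """
    win = int(fs * window_ms / 1000)
    step = int(fs * step_ms / 1000)
    windows = [signal[start:start + win]
               for start in range(0, len(signal) - win + 1, step)]
    return np.stack(windows)

# Example: 5 s of 12-channel synthetic sEMG -> windows for a downstream classifier
x = np.random.randn(5 * 2000, 12)
print(segment_semg(x).shape)  # (49, 400, 12)
```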

    A Self-Adaptive Online Brain Machine Interface of a Humanoid Robot through a General Type-2 Fuzzy Inference System

    This paper presents a self-adaptive general type-2 fuzzy inference system (GT2 FIS) for online motor imagery (MI) decoding to build a brain-machine interface (BMI) and navigate a bipedal humanoid robot in a real experiment, using EEG brain recordings only. GT2 FISs are applied to BMI for the first time in this study. We also account for several constraints commonly associated with BMI in practice: 1) the maximum number of electroencephalography (EEG) channels is limited and fixed, 2) repeated user training sessions are not possible, and 3) unsupervised, low-complexity feature extraction methods are desirable. The novel learning method presented in this paper consists of a self-adaptive GT2 FIS that can both incrementally update its parameters and evolve (a.k.a. self-adapt) its structure via creation, fusion, and scaling of fuzzy rules in an online BMI experiment with a real robot. The structure identification is based on an online GT2 Gath-Geva algorithm in which every MI decoding class can be represented by multiple fuzzy rules (models). The effectiveness of the proposed method is demonstrated in a detailed BMI experiment in which 15 untrained users were able to accurately interface with a humanoid robot, in a single thirty-minute session, using signals from only six EEG electrodes.
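    The paper's general type-2 inference and online Gath-Geva structure learning are considerably more involved than can be shown here; as a minimal sketch of the type-2 building block involved, the following assumes an interval type-2 Gaussian set with an uncertain mean (a simplification of general type-2 sets), and all parameter values are illustrative.

```python
import numpy as np

def it2_gaussian(x, m_lower, m_upper, sigma):
    """Lower and upper membership of an interval type-2 Gaussian set whose
    mean is uncertain within [m_lower, m_upper]; a simplified stand-in for
    the general type-2 sets used in the paper."""
    def g(x, m):  # ordinary type-1 Gaussian membership
        return np.exp(-0.5 * ((x - m) / sigma) ** 2)
    upper = np.where(x < m_lower, g(x, m_lower),
                     np.where(x > m_upper, g(x, m_upper), 1.0))
    lower = np.minimum(g(x, m_lower), g(x, m_upper))
    return lower, upper

# Example: evaluate three crisp inputs against one made-up type-2 set
lo, up = it2_gaussian(np.array([0.2, 0.5, 0.9]), m_lower=0.4, m_upper=0.6, sigma=0.1)
print(lo, up)
```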

    Personalised and Adjustable Interval Type-2 Fuzzy-Based PPG Quality Assessment for the Edge

    Most of today's wearable technology provides seamless cardiac activity monitoring. Specifically, the vast majority employ Photoplethysmography (PPG) sensors to acquire blood volume pulse information, which is further analysed to extract useful, physiologically relevant features. Nevertheless, the reliability of PPG signals presents several challenges that strongly affect such data processing. This is mainly due to distortion of the PPG waveform morphology by motion artefacts, which can lead to erroneous interpretation of the extracted cardiac-related features. On this basis, in this paper we propose a novel personalised and adjustable Interval Type-2 Fuzzy Logic System (IT2FLS) for assessing the quality of PPG signals. The proposed system employs a personalised approach to adapt the IT2FLS parameters to the unique characteristics of each individual's PPG signals. Additionally, the system provides adjustable levels of personalisation, allowing healthcare providers to tune the system to the specific requirements of different applications. The proposed system achieved an average accuracy of up to 93.72% during validation. The presented system has the potential to enable ultra-low-complexity, real-time PPG quality assessment, improving the accuracy and reliability of PPG-based health monitoring systems at the edge.
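    The abstract does not state which signal features feed the IT2FLS, so the sketch below merely computes two commonly used per-window PPG quality proxies (skewness and peak-to-peak amplitude) that such a fuzzy assessor could take as crisp inputs; the feature choice, sampling rate, and window length are assumptions, not the paper's design.

```python
import numpy as np
from scipy.stats import skew

def ppg_quality_inputs(ppg: np.ndarray, fs: int = 64, window_s: int = 5) -> np.ndarray:
    """Compute simple per-window quality proxies (skewness, peak-to-peak
    amplitude) that could serve as crisp inputs to a fuzzy quality assessor.
    Parameters are illustrative only."""
    win = fs * window_s
    feats = []
    for start in range(0, len(ppg) - win + 1, win):
        w = ppg[start:start + win]
        feats.append((skew(w), np.ptp(w)))
    return np.array(feats)

# Example on 30 s of synthetic PPG sampled at 64 Hz
print(ppg_quality_inputs(np.random.randn(30 * 64)).shape)  # (6, 2)
```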

    Clinical Brain-Computer Interface Challenge 2020 (CBCIC at WCCI2020): Overview, methods and results

    In the field of brain-computer interface (BCI) research, the availability of high-quality open-access datasets is essential for benchmarking the performance of emerging algorithms. The existing open-access datasets from past competitions mostly contain data from healthy individuals, while the major application area of BCI is the clinical domain. Newly proposed algorithms for enhancing BCI performance are therefore very often tested only against healthy subjects' datasets, which does not guarantee their success on patients' datasets, which are more challenging due to greater nonstationarity and altered neurodynamics. To partially mitigate this scarcity, the Clinical BCI Challenge aimed to provide a rich open-access dataset of stroke patients recorded under a paradigm similar to neurorehabilitation. Another key feature of this challenge is that, unlike many past competitions, it was designed for algorithms in both within-subject and cross-subject categories, as a major thrust of current BCI technology is to realize calibration-free BCI designs. In this paper, we discuss the winning algorithms and their performance across both competition categories, which may help develop advanced algorithms for reliable BCIs in real-world practical applications.
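    As an illustration of the cross-subject category described above (not the challenge's actual evaluation protocol), a leave-one-subject-out split can be set up with scikit-learn as follows; the array shapes, labels, and subject counts are synthetic.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

# Illustrative only: the challenge's real splits are defined by its organisers.
X = np.random.randn(80, 16)             # 80 trials, 16 features (made-up shapes)
y = np.random.randint(0, 2, 80)         # binary motor imagery labels
subjects = np.repeat(np.arange(8), 10)  # 8 patients, 10 trials each

logo = LeaveOneGroupOut()
for train_idx, test_idx in logo.split(X, y, groups=subjects):
    held_out = subjects[test_idx][0]
    # train on 7 subjects, test on the held-out one (cross-subject evaluation)
    print(f"test subject {held_out}: {len(train_idx)} train / {len(test_idx)} test trials")
```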

    Single-Trial Recognition of Video Gamer’s Expertise from Brain Haemodynamic and Facial Emotion Responses

    With increasing consumer demand for video gaming entertainment, the game industry is exploring novel forms of game interaction, such as direct interfaces between the game and the gamers' cognitive or affective responses. In this work, gamers' brain activity was imaged using functional near infrared spectroscopy (fNIRS) while they watched videos of a video game they play (League of Legends). A video of each participant's face was also recorded for each of a total of 15 trials, where a trial is defined as watching one gameplay video. From the data collected, i.e., the gamers' fNIRS data combined with emotional-state estimates from their facial expressions, the expertise level of the gamers has been decoded per trial in a multi-modal framework comprising unsupervised deep feature learning and classification by state-of-the-art models. The best three-class classification accuracy, 91.44%, is obtained using a cascade of the random convolutional kernel transform (ROCKET) feature extraction method and a deep classifier. This is the first work that aims to decode the expertise level of gamers using non-restrictive, portable brain imaging technology together with emotional state recognition derived from gamers' facial expressions. This work has profound implications for novel designs of future human interaction with video games and brain-controlled games.
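    As a loose, simplified stand-in for the ROCKET stage of the reported pipeline (the published transform additionally uses dilations, padding, and thousands of kernels), the sketch below draws random convolutional kernels and pools two statistics per kernel from a univariate time series.

```python
import numpy as np

def rocket_like_features(x: np.ndarray, n_kernels: int = 100, seed=None) -> np.ndarray:
    """ROCKET-style sketch: convolve a univariate series with random kernels
    and keep two pooled statistics per kernel (max and the proportion of
    positive values, PPV). Simplified relative to the published transform."""
    rng = np.random.default_rng(seed)
    feats = []
    for _ in range(n_kernels):
        length = rng.choice([7, 9, 11])
        weights = rng.normal(0.0, 1.0, length)
        weights -= weights.mean()            # zero-centre the kernel
        bias = rng.uniform(-1.0, 1.0)
        conv = np.convolve(x, weights, mode="valid") + bias
        feats.extend([conv.max(), (conv > 0).mean()])
    return np.array(feats)

# Example: 2000-sample fNIRS-like trace -> 200 features for a downstream classifier
print(rocket_like_features(np.random.randn(2000)).shape)  # (200,)
```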

    A Perceptual Computing Approach for Learning Interpretable Unsupervised Fuzzy Scoring Systems

    Scoring a driver’s behaviour through the analysis of his/her road trip data is an active area of research. However, such systems suffer from a lack of explainability, the integration of expert bias in the calculated score, and neglect of the semantic uncertainty of the variables contributing to the score. To overcome these limitations, we propose a novel perceptual-computing-based unsupervised scoring system. The prowess of the proposed system is exemplified in a case study of driver scoring from telemetry data. Our approach yields scores with significantly higher separability between drivers exhibiting responsible and irresponsible (aggressive or drowsy) driving behaviours than the formal method of computing these scores (p-values of 3.94 × 10⁻⁴ and 3.42 × 10⁻³, respectively, in a Kolmogorov-Smirnov test). Further, the proposed method displayed higher robustness in a bootstrap test (in which 30% of the original data was omitted at random), providing scores that were 90% similar to the original ones for all results within a 95% confidence interval.
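    The separability claim above rests on a two-sample Kolmogorov-Smirnov test; a minimal reproduction of that comparison on synthetic (not the study's) score distributions could look like this.

```python
import numpy as np
from scipy.stats import ks_2samp

# Hypothetical driver scores: the real study compares scores of responsible vs
# irresponsible (aggressive or drowsy) drivers; these numbers are synthetic.
rng = np.random.default_rng(0)
responsible = rng.normal(0.80, 0.05, 40)
irresponsible = rng.normal(0.55, 0.10, 40)

stat, p_value = ks_2samp(responsible, irresponsible)
print(f"KS statistic = {stat:.3f}, p = {p_value:.2e}")  # small p => separable score distributions
```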

    A Gentle Introduction and Survey on Computing with Words (CWW) Methodologies

    Human beings have an inherent capability to use linguistic information (LI) seamlessly even though it is vague and imprecise. Computing with Words (CWW) was proposed to impart this human capability to computing systems. The interest in the field of CWW is evident from the number of publications on various CWW methodologies. These methodologies use different ways to model the semantics of LI. However, to the best of our knowledge, the literature on these methodologies is mostly scattered and does not give an interested researcher a comprehensive yet gentle guide to their notion and utility. Hence, to introduce the foundations and state-of-the-art CWW methodologies, we provide concise but wide-ranging coverage of them in a simple and easy-to-understand manner. We believe that the simplicity of this review and introduction to the CWW methodologies makes it especially useful for investigators embarking on the use of CWW for the first time. We also provide future research directions for interested and motivated researchers to build upon.
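    As a toy example of what modelling the semantics of LI typically means in CWW, the sketch below encodes the word "Warm" as a type-1 trapezoidal fuzzy set; the word, scale, and parameter values are illustrative assumptions, not drawn from the survey.

```python
import numpy as np

def trapezoid(x, a, b, c, d):
    """Type-1 trapezoidal membership function, one common way of encoding
    the semantics of a word in CWW (parameters are illustrative)."""
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

# The word "Warm" on a 0-40 degree Celsius temperature scale (made-up parameters)
temps = np.array([10.0, 18.0, 24.0, 33.0])
print(trapezoid(temps, 15, 20, 27, 32))  # degrees of membership in "Warm"
```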